Algorithms for Binary Neural Networks
FIGURE 3.16
Training and testing curves of PCNN-22 with λ = 0 and λ = 1e−4, showing that the
projection has little effect on convergence.
3.6
RBCN: Rectified Binary Convolutional Networks with Generative Adversarial Learning
Quantization approaches represent network weights and activations with fixed-point integers
of low bit width, allowing computation with efficient bitwise operations. Binarization [199,
159] is an extreme quantization approach in which both weights and activations are constrained
to +1 or −1, each represented by a single bit. This section designs highly compact binary
neural networks (BNNs) from the perspectives of quantization and network pruning.
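To make the bitwise-computation claim concrete, the following is a minimal sketch (not the book's implementation) of sign binarization and of computing the dot product of two ±1 vectors with XNOR and popcount. It uses the standard encoding of +1 as bit 1 and −1 as bit 0, under which a · b = 2 · popcount(XNOR(a, b)) − n for vectors of length n; the function names `binarize` and `xnor_dot` are illustrative.

```python
import numpy as np

def binarize(x):
    """Sign binarization: map real values to {+1, -1} (zero maps to +1)."""
    return np.where(x >= 0, 1, -1)

def xnor_dot(a_pm1, b_pm1):
    """Dot product of two {+1, -1} vectors via XNOR + popcount.

    Encoding +1 as bit 1 and -1 as bit 0, matching bits contribute +1
    and mismatching bits contribute -1, so
        a . b = 2 * popcount(XNOR(a, b)) - n.
    """
    n = len(a_pm1)
    # Pack each vector into an integer bitmask.
    a_bits = sum(1 << i for i, v in enumerate(a_pm1) if v > 0)
    b_bits = sum(1 << i for i, v in enumerate(b_pm1) if v > 0)
    mask = (1 << n) - 1           # keep only the n valid bits
    xnor = ~(a_bits ^ b_bits) & mask
    matches = bin(xnor).count("1")
    return 2 * matches - n

# The bitwise result agrees with the ordinary dot product:
w = binarize(np.array([0.3, -1.2, 0.7, -0.1]))   # -> [ 1, -1,  1, -1]
x = binarize(np.array([-0.5, -0.9, 2.0, 0.4]))   # -> [-1, -1,  1,  1]
print(xnor_dot(w, x) == int(np.dot(w, x)))        # True
```

In real BNN kernels the bit-packing is done once per layer and popcount runs on 32- or 64-bit words in hardware, which is where the speedup over floating-point multiply-accumulate comes from.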
FIGURE 3.17
Illustration of binary kernels D_i^l (first row), feature maps produced by D_i^l (second
row), and the corresponding feature maps after binarization (third row) when J = 4. This
confirms the diversity in PCNNs.